Dancelets Mining for Video Recommendation Based on Dance Styles
https://gyazo.com/14a40b219bee10a8677f338ded05928b
Author
Abstract
Dance is a unique and meaningful type of human expression, composed of abundant and varied action elements. However, existing methods based on associated texts and spatial visual features have difficulty capturing the highly articulated motion patterns. To overcome this limitation, we propose to take advantage of the intrinsic motion information in dance videos to solve the video recommendation problem. We present a novel system that recommends dance videos based on a mid-level action representation, termed Dancelets. The Dancelets are used to bridge the semantic gap between video content and the high-level concept of dance style, which plays a significant role in characterizing different types of dances. The proposed method automatically mines dancelets with a concatenation of Normalized Cut clustering and Linear Discriminant Analysis, which ensures that the discovered dancelets are both representative and discriminative. Additionally, to exploit the motion cues in videos, we employ motion boundaries as saliency priors to generate volumes of interest and extract C3D features to capture spatiotemporal information from the mid-level patches. Extensive experiments on our proposed large-scale dance dataset, the HIT Dances dataset, demonstrate the effectiveness of the proposed methods for dance-style-based video recommendation.
Source
IEEE Transactions on Multimedia ( Volume: 19 , Issue: 4 , April 2017 )
Comments
intrinsic motion components
we employ motion boundaries as saliency priors to generate volumes of interest and extract C3D features to capture spatiotemporal information from the mid-level patches
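A rough idea of the "motion boundaries as saliency priors" step: motion boundaries are the spatial gradients of a dense optical-flow field, so regions where the flow changes abruptly (typically the dancer's silhouette and limbs) light up as salient. A minimal NumPy sketch, assuming the flow field is already computed by some optical-flow method (the paper's exact pipeline and normalization are not shown here):

```python
import numpy as np

def motion_boundary_saliency(flow):
    """Saliency map from motion boundaries of a dense optical-flow field.

    flow: (H, W, 2) array of per-pixel (dx, dy) displacements.
    Returns an (H, W) map normalized to [0, 1]; large values mark
    motion boundaries, i.e. candidate regions for volumes of interest.
    """
    # Spatial gradients of each flow channel (np.gradient returns
    # derivatives along axis 0 = y, then axis 1 = x).
    du_dy, du_dx = np.gradient(flow[..., 0])
    dv_dy, dv_dx = np.gradient(flow[..., 1])
    mag = np.sqrt(du_dx**2 + du_dy**2 + dv_dx**2 + dv_dy**2)
    return mag / (mag.max() + 1e-8)

# Toy example: a square region translating right over a static background.
flow = np.zeros((8, 8, 2))
flow[2:6, 2:6, 0] = 1.0
sal = motion_boundary_saliency(flow)
# saliency peaks along the square's edges and is zero deep inside/outside
```

Volumes of interest would then be sampled around high-saliency regions across consecutive frames before feeding the resulting spatiotemporal patches to C3D.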
identification of dance styles
evaluation criterion: Mean Average Precision at N (MAP@N)
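For reference, MAP@N averages, over all queries, the average precision computed on the top-N retrieved items. A small sketch of one common formulation (the paper's exact normalization of AP@N may differ; the denominator convention below, min(|relevant|, N), is one of several in use):

```python
def average_precision_at_n(ranked, relevant, n):
    """AP@N for one query: mean of precision@k over the ranks k <= n
    at which a relevant item appears."""
    hits, score = 0, 0.0
    for k, item in enumerate(ranked[:n], start=1):
        if item in relevant:
            hits += 1
            score += hits / k
    return score / min(len(relevant), n) if relevant else 0.0

def map_at_n(queries, n):
    """MAP@N over an iterable of (ranked_list, relevant_set) pairs."""
    queries = list(queries)
    return sum(average_precision_at_n(r, rel, n) for r, rel in queries) / len(queries)

# One query: relevant items {a, c}; hits at ranks 1 and 3 within top-3.
ap = average_precision_at_n(["a", "b", "c", "d"], {"a", "c"}, 3)
# AP@3 = (1/1 + 2/3) / 2 = 5/6
```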
URL
Tag